
Unsupervised training



From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI

Roman Beliy, Guy Gaziv, Assaf Hoogi, Francesca Strappini, Tal Golan, Michal Irani

Neural Information Processing Systems

Developing a method for high-quality reconstruction of seen images from the corresponding brain activity is an important milestone towards decoding the contents of dreams and mental imagery (Fig 1a). In this task, one attempts to solve for the mapping between fMRI recordings and their corresponding natural images, using many "labeled" {Image, fMRI} pairs (i.e., images and their corresponding fMRI responses).
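
At its core, the task setup described above amounts to fitting a decoder on labeled {Image, fMRI} pairs. Below is a minimal sketch of that supervised pair training, assuming a hypothetical toy Decoder network, a paired data batch, and a pixel-space MSE loss; it is illustrative only and is not the authors' architecture or their self-supervision scheme.

```python
# Minimal sketch (assumptions): supervised training on {image, fMRI} pairs.
# `Decoder`, the voxel count, and the image size are illustrative placeholders.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Toy fMRI -> image decoder: a single linear map into a small RGB image."""
    def __init__(self, n_voxels=4000, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Linear(n_voxels, 3 * img_size * img_size)

    def forward(self, fmri):
        x = self.fc(fmri)
        return x.view(-1, 3, self.img_size, self.img_size)

def train_step(decoder, optimizer, fmri, image):
    """One supervised step on a batch of labeled {image, fMRI} pairs."""
    optimizer.zero_grad()
    recon = decoder(fmri)
    loss = nn.functional.mse_loss(recon, image)  # pixel-space reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()
```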




Fast Equivariant Imaging: Acceleration for Unsupervised Learning via Augmented Lagrangian and Auxiliary PnP Denoisers

Xu, Guixian, Li, Jinglai, Tang, Junqi

arXiv.org Artificial Intelligence

In this work, we propose Fast Equivariant Imaging (FEI), a novel unsupervised learning framework to rapidly and efficiently train deep imaging networks without ground-truth data. By reformulating the Equivariant Imaging optimization problem via the method of Lagrange multipliers and utilizing plug-and-play denoisers, this unsupervised scheme achieves superior efficiency and performance compared to the vanilla Equivariant Imaging paradigm. In particular, our FEI schemes achieve an order-of-magnitude (10x) acceleration over standard EI when training a U-Net for X-ray CT reconstruction and image inpainting, with improved generalization performance.
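
For context, the vanilla Equivariant Imaging objective that FEI accelerates combines a measurement-consistency term with an equivariance constraint under a group of transformations. The sketch below illustrates that baseline loss, assuming `A` is a linear forward operator exposing a call and an `adjoint` method and using a random cyclic shift as the group action; it does not include the paper's augmented-Lagrangian reformulation or plug-and-play denoisers.

```python
# Minimal sketch (assumptions): the vanilla Equivariant Imaging (EI) loss.
# `net`, the operator `A`, and the shift transform are illustrative placeholders.
import torch
import torch.nn.functional as F

def ei_loss(net, A, y, shift_max=8):
    """Unsupervised EI objective: measurement consistency + equivariance."""
    x_hat = net(A.adjoint(y))               # reconstruct from measurements alone
    loss_mc = F.mse_loss(A(x_hat), y)       # data / measurement consistency

    # Apply a random group action (here: a cyclic spatial shift) to the estimate ...
    dx = int(torch.randint(-shift_max, shift_max + 1, (1,)))
    x_t = torch.roll(x_hat, shifts=dx, dims=-1)
    # ... and require the reconstruction pipeline to commute with that action.
    x_t_hat = net(A.adjoint(A(x_t)))
    loss_eq = F.mse_loss(x_t_hat, x_t)

    return loss_mc + loss_eq
```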





Unsupervised Training of Vision Transformers with Synthetic Negatives

Giakoumoglou, Nikolaos, Floros, Andreas, Papadopoulos, Kleanthis Marios, Stathaki, Tania

arXiv.org Artificial Intelligence

This paper does not introduce a novel method per se. Instead, we address the neglected potential of hard negative samples in self-supervised learning. Previous works explored synthetic hard negatives but rarely in the context of vision transformers. We build on this observation and integrate synthetic hard negatives to improve vision transformer representation learning. This simple yet effective technique notably improves the discriminative power of learned representations. Our experiments show performance improvements for both DeiT-S and Swin-T architectures.
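
As a rough illustration of what synthetic hard negatives can mean in a contrastive objective, the sketch below mixes the hardest negatives from a memory queue in embedding space and adds them to the InfoNCE denominator. All function names and hyper-parameters here are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch (assumptions): synthesizing extra hard negatives by convexly
# mixing the hardest queue negatives before an InfoNCE loss. Names, shapes,
# and hyper-parameters are illustrative placeholders.
import torch
import torch.nn.functional as F

def info_nce_with_synthetic_negatives(q, k_pos, queue, n_synth=64, tau=0.2):
    """q, k_pos: (B, D) embeddings; queue: (K, D) bank of negatives, K >= n_synth."""
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    queue = F.normalize(queue, dim=1)

    # Rank queue negatives by similarity to each query and keep the hardest ones.
    sim = q @ queue.t()                                   # (B, K)
    hard_idx = sim.topk(n_synth, dim=1).indices           # (B, n_synth)
    hard = queue[hard_idx]                                # (B, n_synth, D)

    # Synthesize new negatives as convex mixes of pairs of hard negatives.
    perm = torch.randperm(n_synth, device=q.device)
    alpha = torch.rand(q.size(0), n_synth, 1, device=q.device)
    synth = F.normalize(alpha * hard + (1 - alpha) * hard[:, perm], dim=2)

    # InfoNCE: the positive competes against both real and synthetic negatives.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)          # (B, 1)
    l_neg = torch.cat([sim, torch.einsum('bd,bnd->bn', q, synth)], dim=1)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```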